163 research outputs found

    Constacyclic Codes over Finite Fields

    An equivalence relation called isometry is introduced to classify constacyclic codes over a finite field; the polynomial generators of constacyclic codes of length ℓ^t p^s are characterized, where p is the characteristic of the finite field and ℓ is a prime different from p.
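For context, the standard definition underlying this classification (a textbook fact, not taken from the abstract itself): a λ-constacyclic code C of length n over 𝔽_q, with λ a nonzero element of 𝔽_q, is a linear code closed under the constacyclic shift, equivalently an ideal of a quotient ring:

```latex
% \lambda-constacyclic: closure under the constacyclic shift
(c_0, c_1, \dots, c_{n-1}) \in C \;\Longrightarrow\; (\lambda c_{n-1}, c_0, \dots, c_{n-2}) \in C,
% equivalently, C is an ideal of the quotient ring
C \trianglelefteq \mathbb{F}_q[x] / \langle x^n - \lambda \rangle,
% hence C = \langle g(x) \rangle for some monic divisor g(x) of x^n - \lambda.
```

The "polynomial generators" characterized in the paper are exactly these g(x) for lengths of the form ℓ^t p^s.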

    The Devil of Face Recognition is in the Noise

    The growing scale of face recognition datasets empowers us to train strong convolutional networks for face recognition. While a variety of architectures and loss functions have been devised, we still have a limited understanding of the sources and consequences of label noise inherent in existing datasets. We make the following contributions: 1) We contribute cleaned subsets of popular face databases, i.e., the MegaFace and MS-Celeb-1M datasets, and build a new large-scale noise-controlled IMDb-Face dataset. 2) With the original datasets and cleaned subsets, we profile and analyze the label noise properties of MegaFace and MS-Celeb-1M. We show that a few orders of magnitude more samples are needed to achieve the same accuracy yielded by a clean subset. 3) We study the association between different types of noise, i.e., label flips and outliers, and the accuracy of face recognition models. 4) We investigate ways to improve data cleanliness, including a comprehensive user study on the influence of data labeling strategies on annotation accuracy. The IMDb-Face dataset has been released at https://github.com/fwang91/IMDb-Face. Comment: accepted to ECCV'1

    NeU-NBV: Next Best View Planning Using Uncertainty Estimation in Image-Based Neural Rendering

    Autonomous robotic tasks require actively perceiving the environment to achieve application-specific goals. In this paper, we address the problem of positioning an RGB camera to collect the most informative images to represent an unknown scene, given a limited measurement budget. We propose a novel mapless planning framework to iteratively plan the next best camera view based on collected image measurements. A key aspect of our approach is a new technique for uncertainty estimation in image-based neural rendering, which guides measurement acquisition towards the most uncertain view among the view candidates, thus maximising the information value during data collection. By incrementally adding new measurements into our image collection, our approach efficiently explores an unknown scene in a mapless manner. We show that our uncertainty estimation is generalisable and valuable for view planning in unknown scenes. Our planning experiments using synthetic and real-world data verify that our uncertainty-guided approach finds informative images leading to more accurate scene representations when compared against baselines. Comment: Accepted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 202
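The acquisition loop described above can be sketched as a greedy argmax over candidate-view uncertainties. Note that `uncertainty_fn` here is a hypothetical stand-in for the paper's image-based neural-rendering uncertainty estimate; the sketch only shows the planning skeleton, not the rendering network:

```python
def plan_next_best_view(candidate_views, uncertainty_fn, budget):
    """Greedy uncertainty-guided view planning (illustrative sketch).

    candidate_views: list of candidate camera poses (any representation).
    uncertainty_fn: callable (collected_views, candidate) -> scalar score;
                    a stand-in for the neural-rendering uncertainty at that view.
    budget: total number of image measurements allowed.
    """
    # Start from an initial seed view, as the planner iterates from
    # already-collected measurements.
    collected = [candidate_views.pop(0)]
    while len(collected) < budget and candidate_views:
        # Score every remaining candidate and acquire the most uncertain one.
        scores = [uncertainty_fn(collected, v) for v in candidate_views]
        best = max(range(len(scores)), key=scores.__getitem__)
        collected.append(candidate_views.pop(best))
    return collected
```

With a toy uncertainty (distance to the nearest collected view), the loop naturally spreads measurements across the unexplored candidate space.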

    Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.
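The two constraints described above can be sketched as a single projection step applied inside an iterative solver. This is a minimal illustration under assumed conventions (datacube shaped time × height × width, mask from a simple relative threshold); the actual SIC algorithm and its surrounding iterative reconstruction are not reproduced here:

```python
import numpy as np

def sic_project(datacube, unsheared, mask_thresh=0.05):
    """Apply space and intensity constraints to a datacube estimate (sketch).

    datacube: (T, H, W) current estimate of the dynamic scene.
    unsheared: (H, W) time-unsheared image from the external CCD (the prior).
    mask_thresh: relative threshold for extracting the zone of action
                 (a hypothetical choice, not from the paper).
    """
    # Spatial constraint: zero out pixels outside the zone of action,
    # i.e. where the time-unsheared image shows no signal.
    mask = unsheared > mask_thresh * unsheared.max()
    cube = datacube * mask[None, :, :]

    # Intensity constraint: the temporal projection of the datacube
    # should not exceed the time-unsheared CCD image; rescale where it does.
    proj = cube.sum(axis=0)
    scale = np.ones_like(proj)
    over = proj > unsheared
    scale[over] = unsheared[over] / proj[over]
    return cube * scale[None, :, :]
```

In a full reconstruction this projection would be interleaved with the data-fidelity updates of the iterative solver.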

    Ultrafast imaging of light scattering dynamics using second-generation compressed ultrafast photography

    We present single-shot real-time video recording of light scattering dynamics by second-generation compressed ultrafast photography (G2-CUP). Using G2-CUP at 100 billion frames per second, in a single camera exposure, we experimentally captured the evolution of the light intensity distribution in an engineered thin scattering plate assembly. G2-CUP, which implements a new reconstruction paradigm and a more efficient hardware design than its predecessors, markedly improves the reconstructed image quality. The ultrafast imaging reveals the instantaneous light scattering pattern as a photonic Mach cone. We envision that our technology will find a diverse range of applications in biomedical imaging, materials science, and physics.

    Dual-view photoacoustic microscopy for quantitative cell nuclear imaging

    Optical-resolution photoacoustic microscopy (OR-PAM) is an emerging imaging modality for studying biological tissues. However, in conventional single-view OR-PAM, the lateral and axial resolutions—determined optically and acoustically, respectively—are highly anisotropic. In this Letter, we introduce dual-view OR-PAM to improve axial resolution, achieving three-dimensional (3D) resolution isotropy. We first use 0.5 μm polystyrene beads and carbon fibers to validate the resolution isotropy improvement. Imaging of mouse brain slices further demonstrates the improved resolution isotropy, revealing the 3D structure of cell nuclei in detail, which facilitates quantitative cell nuclear analysis.
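The core idea of combining two views with orthogonal anisotropic resolution can be illustrated with a naive frequency-domain fusion: at each spatial frequency, keep the component from whichever view preserved it better. This is an illustrative stand-in, not the Letter's actual fusion or registration pipeline:

```python
import numpy as np

def fuse_dual_view(view_a, view_b):
    """Naively fuse two co-registered views of the same scene (sketch).

    view_a, view_b: 2D arrays, each blurred more strongly along one
    (orthogonal) axis. For every spatial frequency we keep the Fourier
    coefficient with the larger magnitude, i.e. from the view whose
    transfer function attenuated that frequency less.
    """
    Fa = np.fft.fft2(view_a)
    Fb = np.fft.fft2(view_b)
    fused = np.where(np.abs(Fa) >= np.abs(Fb), Fa, Fb)
    return np.fft.ifft2(fused).real
```

Real dual-view systems typically use joint deconvolution with the two point spread functions rather than this per-frequency maximum, and require careful registration of the second view first.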